GP-NeRF: Generalized Perception NeRF for Context-Aware 3D Scene Understanding

👤 Hao Li, Dingwen Zhang, Yalun Dai, Nian Liu, Lechao Cheng, Jingfeng Li, Jingdong Wang, Junwei Han
📅 November 2023
CVPR 2024 · Conference paper

Abstract

Applying NeRF to downstream perception tasks for scene understanding and representation is becoming increasingly popular. Most existing methods treat semantic prediction as an additional rendering task, i.e., a "label rendering" task, to build semantic NeRFs. However, by rendering semantic/instance labels per pixel without considering the contextual information of the rendered image, these methods usually suffer from unclear object boundaries and inconsistent labels for pixels within the same object.
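
For concreteness, the snippet below is a minimal sketch of this per-pixel "label rendering" scheme as used by Semantic-NeRF-style baselines; the function and variable names are illustrative rather than taken from any particular codebase:

```python
import torch

def render_ray_labels(sigmas, logits, deltas):
    """Per-ray "label rendering": semantic logits are alpha-composited
    with the same volume-rendering weights used for color.

    sigmas: (S,)   densities of S samples along one ray
    logits: (S, C) per-sample semantic logits over C classes
    deltas: (S,)   distances between consecutive samples
    """
    alphas = 1.0 - torch.exp(-sigmas * deltas)          # per-sample opacity
    trans = torch.cumprod(
        torch.cat([torch.ones(1), 1.0 - alphas + 1e-10])[:-1], dim=0
    )                                                   # accumulated transmittance
    weights = alphas * trans                            # (S,)
    pixel_logits = (weights[:, None] * logits).sum(0)   # (C,)
    return pixel_logits.argmax()                        # hard label for one pixel
```

Because each ray is composited independently, no information is shared between neighboring pixels, which is exactly the missing image context the paragraph above points to.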

To solve this problem, we propose Generalized Perception NeRF (GP-NeRF), a novel pipeline that makes widely used segmentation models and NeRF work compatibly under a unified framework, to facilitate context-aware 3D scene perception.

Methodology

To accomplish this goal, we introduce transformers to jointly aggregate the radiance field and the semantic embedding field for novel views, and to perform joint volumetric rendering of both fields.
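
As a rough illustration of what joint volumetric rendering of both fields can look like, the sketch below assumes the radiance and semantic-embedding fields are composited with the same per-ray weights, which keeps the rendered semantics geometrically aligned with the rendered image; the function name and tensor layout are our own assumptions:

```python
import torch

def render_joint_fields(weights, rgbs, sem_embeds):
    """Composite radiance and semantic embeddings with shared weights.

    weights:    (R, S)    volume-rendering weights for R rays, S samples
    rgbs:       (R, S, 3) per-sample radiance
    sem_embeds: (R, S, D) per-sample semantic embeddings (e.g. aggregated
                          from source views by a transformer)
    """
    rgb_map = (weights[..., None] * rgbs).sum(dim=-2)        # (R, 3)
    sem_map = (weights[..., None] * sem_embeds).sum(dim=-2)  # (R, D)
    return rgb_map, sem_map
```

The rendered embedding map for a full novel view can then be reshaped to a (D, H, W) feature map and decoded by a 2D segmentation head, so predictions are made with image-level context rather than pixel by pixel.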

In addition, we propose two self-distillation mechanisms (an illustrative sketch follows the list):

1. Semantic Distill Loss: To enhance the discrimination and quality of the semantic field.

2. Depth-Guided Semantic Distill Loss: To maintain geometric consistency throughout the 3D scene understanding process.
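
The section does not spell out the loss formulas, so the sketch below is only one plausible reading of the two mechanisms: a distillation term that pulls the rendered semantics toward a frozen 2D teacher, and a depth-weighted variant that trusts supervision only where rendered and reference depths agree. The function names, KL formulation, and Gaussian depth weighting are all hypothetical:

```python
import torch
import torch.nn.functional as F

def semantic_distill_loss(student_logits, teacher_logits, tau=1.0):
    """Hypothetical Semantic Distill Loss: KL divergence between the
    rendered semantic predictions (student) and a frozen 2D
    segmentation model's predictions (teacher)."""
    s = F.log_softmax(student_logits / tau, dim=-1)
    t = F.softmax(teacher_logits / tau, dim=-1)
    return F.kl_div(s, t, reduction="batchmean") * tau ** 2

def depth_guided_semantic_distill_loss(student_logits, teacher_logits,
                                       rendered_depth, ref_depth, sigma=0.1):
    """Hypothetical depth-guided variant: down-weight pixels whose
    rendered depth disagrees with a reference depth, so semantic
    supervision is applied only where geometry is consistent."""
    w = torch.exp(-(rendered_depth - ref_depth) ** 2 / (2 * sigma ** 2))  # (N,)
    per_pix = F.kl_div(
        F.log_softmax(student_logits, dim=-1),
        F.softmax(teacher_logits, dim=-1),
        reduction="none",
    ).sum(dim=-1)                                                         # per-pixel KL
    return (w * per_pix).mean()
```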

Experimental Results

In evaluation, we conduct experimental comparisons under two perception tasks (i.e., semantic segmentation and instance segmentation) using both synthetic and real-world datasets.

Notably, our method outperforms SOTA approaches by:

1. 6.94% on generalized semantic segmentation
2. 11.76% on semantic segmentation with fine-tuning
3. 8.47% on instance segmentation

These significant improvements demonstrate the effectiveness of our context-aware approach in 3D scene understanding.

Keywords: Deep Learning, NeRF, Semantic Segmentation, 3D Vision, CVPR
